Search for: All records
Total Resources: 4
Filter by Author / Creator
- Pang, Richard Yuanzhe (4)
- Chen, Angelica (3)
- He, He (3)
- Nangia, Nikita (3)
- Parrish, Alicia (3)
- Phang, Jason (3)
- Bowman, Samuel R. (2)
- Joshi, Nitish (2)
- Ma, Johnny (2)
- Padmakumar, Vishakh (2)
- Thompson, Jana (2)
- Bowman, Samuel (1)
- Cho, Kyunghyun (1)
- Holtzman, Ari (1)
- Madaan, Divyam (1)
- Michael, Julian (1)
- Mueller, Aaron (1)
- Wang, Alex (1)
-
Noisy channel models have been especially effective in neural machine translation (NMT). However, recent approaches like "beam search and rerank" (BSR) incur significant computation overhead during inference, making real-world application infeasible. We aim to study if it is possible to build an amortized noisy channel NMT model such that when we do greedy decoding during inference, the translation accuracy matches that of BSR in terms of reward (based on the source-to-target log probability and the target-to-source log probability) and quality (based on BLEU and BLEURT). We attempt three approaches to train the new model: knowledge distillation, one-step-deviation imitation learning, and Q learning. The first approach obtains the noisy channel signal from a pseudo-corpus, and the latter two approaches aim to optimize toward a noisy-channel MT reward directly. For all three approaches, the generated translations fail to achieve rewards comparable to BSR, but the translation quality approximated by BLEU and BLEURT is similar to the quality of BSR-produced translations. Additionally, all three approaches speed up inference by 1-2 orders of magnitude.
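As context for the reranking step the abstract refers to: in BSR, each beam hypothesis is scored with both a source-to-target and a target-to-source model, and the candidate with the highest combined reward is kept. Below is a minimal sketch of that selection, assuming a simple weighted sum with an interpolation weight `lam`; the function names, the weight, and the candidate tuple format are illustrative assumptions, not the paper's implementation.

```python
def noisy_channel_score(log_p_fwd: float, log_p_bwd: float, lam: float = 1.0) -> float:
    """Combined reward: source-to-target log probability plus a
    weighted target-to-source log probability."""
    return log_p_fwd + lam * log_p_bwd


def rerank(candidates: list[tuple[str, float, float]], lam: float = 1.0) -> str:
    """Return the beam hypothesis with the highest noisy-channel reward.

    Each candidate is (translation, log_p_fwd, log_p_bwd), where the two
    log probabilities come from a forward and a backward NMT model.
    """
    best = max(candidates, key=lambda c: noisy_channel_score(c[1], c[2], lam))
    return best[0]


# Example: three beam hypotheses with made-up scores.
hyps = [("die Katze sitzt", -1.2, -1.5),
        ("eine Katze sitzt", -1.0, -2.4),
        ("die Katze sass", -1.4, -1.8)]
print(rerank(hyps))  # -> "die Katze sitzt" (reward -2.7, the highest)
```

The amortized models studied in the paper aim to reach this reward with a single greedy decoding pass, avoiding the cost of scoring many beam candidates with two models.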
-
Michael, Julian; Holtzman, Ari; Parrish, Alicia; Mueller, Aaron; Wang, Alex; Chen, Angelica; Madaan, Divyam; Nangia, Nikita; Pang, Richard Yuanzhe; Phang, Jason; et al. (Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers))
-
Bowman, Samuel R.; Chen, Angelica; He, He; Joshi, Nitish; Ma, Johnny; Nangia, Nikita; Padmakumar, Vishakh; Pang, Richard Yuanzhe; Parrish, Alicia; Phang, Jason; et al. (NAACL 2022)
To enable building and testing models on long-document comprehension, we introduce QuALITY, a multiple-choice QA dataset with context passages in English that have an average length of about 5,000 tokens, much longer than typical current models can process. Unlike in prior work with passages, our questions are written and validated by contributors who have read the entire passage, rather than relying on summaries or excerpts. In addition, only half of the questions are answerable by annotators working under tight time constraints, indicating that skimming and simple search are not enough to consistently perform well. Our baseline models perform poorly on this task (55.4%) and significantly lag behind human performance (93.5%).
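As a concrete picture of the task format behind the 55.4% / 93.5% numbers above, here is a hypothetical sketch of one multiple-choice item and the accuracy metric; the class and field names are assumptions for illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass


@dataclass
class QualityExample:
    """One multiple-choice item: a long passage (~5,000 tokens on average),
    a question written by someone who read the full passage, four answer
    options, and the index of the gold option."""
    passage: str
    question: str
    options: list[str]   # four candidate answers
    gold_index: int      # 0-based index of the correct option


def accuracy(predictions: list[int], examples: list[QualityExample]) -> float:
    """Fraction of questions answered correctly; the abstract reports
    55.4% for baseline models versus 93.5% for humans on this metric."""
    correct = sum(p == ex.gold_index for p, ex in zip(predictions, examples))
    return correct / len(examples)
```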
-
Pang, Richard Yuanzhe; Parrish, Alicia; Joshi, Nitish; Nangia, Nikita; Phang, Jason; Chen, Angelica; Padmakumar, Vishakh; Ma, Johnny; Thompson, Jana; He, He; et al. (Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies)